
    Advanced Techniques based on Mathematical Morphology for the Analysis of Remote Sensing Images

    Get PDF
    Remote sensing optical images of very high geometrical resolution can provide a precise and detailed representation of the surveyed scene, so the spatial information contained in these images is fundamental for any application requiring image analysis. However, modeling the spatial information is not a trivial task. We address this problem by using operators defined in the mathematical morphology framework to extract spatial features from the image. In this thesis, novel techniques based on mathematical morphology are presented and investigated for the analysis of remote sensing optical images in different applications. Attribute Profiles (APs) are proposed as a novel generalization, based on attribute filters, of the Morphological Profile operator. Attribute filters are connected operators that process an image by removing flat zones according to a given criterion. They are flexible operators, since they can transform an image according to many different attributes (e.g., geometrical, textural and spectral). Furthermore, Extended Attribute Profiles (EAPs), a generalization of APs, are presented for the analysis of hyperspectral images. The EAPs are employed to include spatial features in the thematic classification of hyperspectral images. Two techniques combining EAPs with dimensionality reduction transformations are proposed and applied to image classification: one is based on Independent Component Analysis and the other relies on feature extraction techniques. Moreover, a technique based on APs for extracting features for the detection of buildings in a scene is investigated. Approaches that process an image by considering both the bright and dark components of a scene are also investigated. In particular, the effect of applying attribute filters in an alternating sequential setting is studied. Furthermore, the concept of the Self-Dual Attribute Profile (SDAP) is introduced. SDAPs are APs built on an inclusion tree instead of a min- and max-tree, providing an operator that performs a multilevel filtering of both the bright and dark components of an image. Techniques developed for applications other than image classification are also considered. In particular, a general approach for image simplification based on attribute filters is proposed. Finally, two change detection techniques are developed. The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in accuracy in different fields of application when compared to other state-of-the-art methods.
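
    The sketch below illustrates the basic construction of an attribute profile using the area attribute, assuming scikit-image is available; its area_opening/area_closing filters stand in for the general attribute filters discussed above, and the thresholds and input band are arbitrary placeholders rather than values used in the thesis.

    import numpy as np
    from skimage import data
    from skimage.morphology import area_opening, area_closing

    band = data.camera()                      # stand-in for one image band
    area_thresholds = [25, 100, 500, 2000]    # increasing area criteria

    profile = []
    for t in reversed(area_thresholds):       # closing profile, coarse to fine
        profile.append(area_closing(band, area_threshold=t))
    profile.append(band)                      # original band in the middle
    for t in area_thresholds:                 # opening profile, fine to coarse
        profile.append(area_opening(band, area_threshold=t))

    ap = np.stack(profile, axis=0)            # (2 * len(thresholds) + 1, H, W) feature stack
    print(ap.shape)

    Stacking such profiles per band, or per component obtained from a dimensionality reduction, gives the kind of extended feature vector (EAP) used for classification.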

    Hierarchical Segmentation of Multimodal Images (Segmentation hiérarchique d'images multimodales)

    No full text
    Hierarchies of partitions are widely used in the context of image segmentation. However, in the case of multimodal images, the fusion of multiple hierarchies remains a challenge. Recently, braids of partitions have been proposed as a possible solution to this issue, but have never been implemented in a practical case. In this paper, we propose a new methodology to achieve multimodal segmentation based on this notion of braids of partitions. The new method is applied in a practical case, namely the joint segmentation of hyperspectral and LiDAR data. The results obtained confirm the potential of the proposed method.

    Segmentation of Multimodal Images based on Hierarchies of Partitions

    No full text
    Hierarchies of partitions are widely used in the context of image segmentation, but when it comes to multimodal images, the fusion of multiple hierarchies remains a challenge. Recently, braids of partitions have been proposed as a possible solution to this issue, but have never been implemented in a practical case. In this paper, we propose a new methodology to achieve multimodal segmentation based on this notion of braids of partitions. We apply the new method to a practical example, namely the joint segmentation of hyperspectral and LiDAR data. The results obtained confirm the potential of the proposed method.
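
    As a rough illustration of combining hierarchies from two modalities, the sketch below builds one dendrogram per modality with SciPy, fuses their cophenetic (ultrametric) distances by a simple average, and re-clusters. This naive fusion is only a stand-in and is not the braid-of-partitions construction proposed in the paper; all data and parameters are synthetic placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, cophenet, fcluster

    rng = np.random.default_rng(0)
    n_pixels = 200
    hyperspectral = rng.normal(size=(n_pixels, 50))   # toy spectral features
    lidar = rng.normal(size=(n_pixels, 1))            # toy elevation feature

    # One hierarchy per modality (average-linkage dendrograms).
    z_hs = linkage(hyperspectral, method="average")
    z_li = linkage(lidar, method="average")

    # Fuse the two ultrametrics edge-wise (here: simple average) and re-cluster.
    fused = 0.5 * cophenet(z_hs) + 0.5 * cophenet(z_li)
    z_fused = linkage(fused, method="average")
    labels = fcluster(z_fused, t=10, criterion="maxclust")
    print(labels.shape, labels.max())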

    Pansharpening of images acquired with color filter arrays

    Get PDF
    In remote sensing, a common scenario involves the simultaneous acquisition of a panchromatic (PAN) image, a broad-band image with high spatial resolution, and a multispectral (MS) image, which is composed of several spectral bands but has a lower spatial resolution. The two sensors, mounted on the same platform, can be found on several very high spatial resolution optical remote sensing satellites for Earth observation (e.g., Quickbird, WorldView and SPOT). In this work we investigate an alternative acquisition strategy, which combines the information from both images into a single-band image with the same number of pixels as the PAN. This operation significantly reduces the burden of data downlink by achieving a fixed compression ratio of 1/(1 + b/ρ²) compared to the conventional acquisition modes, where b and ρ denote the number of distinct bands in the MS image and the scale ratio between the PAN and the MS, respectively (e.g., b = ρ = 4, as in many commercial high spatial resolution satellites). Many strategies can be conceived to generate such a compressed image from a given set of PAN and MS sources. A simple option, presented here, is based on an application of color filter array (CFA) theory. Specifically, the value of each pixel in the spatial support of the synthetic image is taken from the corresponding sample either in the PAN or in a given band of the MS upsampled to the size of the PAN. The choice is deterministic and made according to a custom mask. Several works in the literature propose methods to construct masks able to preserve as much spectral content as possible for conventional RGB images. However, those results are not directly applicable to the case at hand, since it deals with i) images with different spatial resolutions, ii) potentially more than three spectral bands and iii) in general, different radiometric dynamics across bands. A tentative approach to address these issues is presented in this work. The compressed image resulting from the proposed acquisition strategy is processed to generate an image featuring both the spatial resolution of the PAN and the spectral bands of the MS. This final product allows a direct comparison with the result of any standard pansharpening algorithm; the latter refers to a specific instance of data fusion (i.e., fusion of a PAN and an MS image), which differs from our scenario since both sources are separately taken as input. In our setting, the fusion step performed at the ground segment jointly involves a fusion and a reconstruction problem (also known as demosaicing in the CFA literature). We propose to address this problem with a variational approach. We present preliminary results for the proposed scheme on real remotely sensed images, tested over two datasets acquired by the Quickbird and Geoeye-1 platforms, which show superior performance compared to applying a basic radiometric compression algorithm to both sources before performing a pansharpening protocol. The validation of the final products in both scenarios allows the tradeoff between the compression of the source data and the quality loss suffered to be appreciated both visually and numerically.
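
    A minimal sketch of the mask-based compression idea follows: each pixel of the single-band compressed image is taken either from the PAN or from one band of the MS upsampled to the PAN grid, according to a periodic mask. The mask pattern, image sizes and data below are illustrative placeholders, not the mask design proposed in this work.

    import numpy as np

    rng = np.random.default_rng(0)
    rho, b = 4, 4                                   # scale ratio and number of MS bands
    pan = rng.random((256, 256))                    # toy panchromatic image
    ms = rng.random((b, 256 // rho, 256 // rho))    # toy multispectral image

    # Nearest-neighbour upsampling of the MS bands to the PAN grid.
    ms_up = ms.repeat(rho, axis=1).repeat(rho, axis=2)

    # Periodic mask: value 0 selects the PAN, values 1..b select an MS band.
    pattern = np.zeros((rho, rho), dtype=int)
    pattern[0, :b] = np.arange(1, b + 1)            # one MS sample per band in each rho x rho cell
    mask = np.tile(pattern, (256 // rho, 256 // rho))

    mosaic = np.where(mask == 0, pan, np.choose(np.maximum(mask - 1, 0), ms_up))
    print(mosaic.shape)                             # single-band image with the PAN's pixel count

    With b = ρ = 4, the compressed image has the same number of pixels as the PAN alone, which corresponds to the fixed compression ratio 1/(1 + b/ρ²) mentioned above.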

    Vector attribute profiles for hyperspectral image classification

    Get PDF
    Morphological attribute profiles are among the most prominent spectral-spatial pixel description methods. They are efficient, effective and highly customizable multi-scale tools based on hierarchical representations of a scalar input image. Their application to multivariate images in general, and hyperspectral images in particular, has so far been conducted using the marginal strategy, i.e. by processing each image band (possibly obtained through a dimension reduction technique) independently. In this paper, we investigate the alternative vector strategy, which consists in processing the available image bands simultaneously. The vector strategy is based on a vector ordering relation that leads to the computation of a single max- and min-tree per hyperspectral dataset, from which attribute profiles can then be computed as usual. We explore known vector ordering relations for constructing such max-trees and subsequently vector attribute profiles, and introduce a combination of the marginal and vector strategies. We provide an experimental comparison of these approaches in the context of hyperspectral classification on common datasets, where the proposed approach outperforms the widely used marginal strategy.
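
    The sketch below contrasts the marginal strategy (filtering each band independently) with a reduced-ordering stand-in for the vector strategy (ranking all pixel vectors by a single scalar, here the first principal component, and filtering that ranking image once). It assumes scikit-image and scikit-learn, and it does not reproduce the specific vector orderings studied in the paper.

    import numpy as np
    from sklearn.decomposition import PCA
    from skimage.morphology import area_opening

    rng = np.random.default_rng(0)
    h, w, bands = 64, 64, 30
    cube = rng.random((h, w, bands))                  # toy hyperspectral cube

    # Marginal strategy: one filtered result per band.
    marginal = np.stack(
        [area_opening((cube[..., i] * 255).astype(np.uint8), area_threshold=100)
         for i in range(bands)], axis=-1)

    # Reduced-ordering stand-in: a single scalar ranking image, filtered once.
    scores = PCA(n_components=1).fit_transform(cube.reshape(-1, bands)).reshape(h, w)
    ranking = scores.argsort(axis=None).argsort().reshape(h, w).astype(np.uint32)
    vector_like = area_opening(ranking, area_threshold=100)
    print(marginal.shape, vector_like.shape)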

    Joint demosaicing and fusion of multiresolution coded acquisitions: A unified image formation and reconstruction method

    Full text link
    Novel optical imaging devices allow for hybrid acquisition modalities, such as compressed acquisitions with locally different spatial and spectral resolutions captured by a single focal plane array. In this work, we propose to model the capturing system of a multiresolution coded acquisition (MRCA) in a unified framework, which natively includes conventional systems such as those based on spectral/color filter arrays, compressed coded apertures, and multiresolution sensing. We also propose a model-based image reconstruction algorithm performing a joint demosaicing and fusion (JoDeFu) of any acquisition modeled in the MRCA framework. The JoDeFu reconstruction algorithm solves an inverse problem with a proximal splitting technique and is able to reconstruct an uncompressed image datacube at the highest available spatial and spectral resolution. An implementation of the code is available at https://github.com/danaroth83/jodefu.
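
    The sketch below shows the kind of inverse problem involved: reconstructing an image from masked (coded) samples with a generic proximal splitting iteration (ISTA with a DCT sparsity prior), assuming SciPy. It is not the JoDeFu algorithm itself, whose implementation is at the repository linked above, and the data and parameters are synthetic placeholders.

    import numpy as np
    from scipy.fft import dctn, idctn

    truth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth toy image
    rng = np.random.default_rng(0)
    mask = rng.random((64, 64)) < 0.3            # coded acquisition: 30% of samples kept
    y = mask * truth

    def soft(v, t):                              # proximal operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    lam, step = 1e-3, 1.0                        # orthonormal DCT + binary mask => Lipschitz <= 1
    coeffs = np.zeros_like(truth)
    for _ in range(200):
        residual = mask * idctn(coeffs, norm="ortho") - y
        grad = dctn(mask * residual, norm="ortho")
        coeffs = soft(coeffs - step * grad, step * lam)

    estimate = idctn(coeffs, norm="ortho")
    print(np.mean((estimate - truth) ** 2))      # reconstruction error on the toy data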

    Unmixing-based gas plume tracking in LWIR hyperspectral video sequences

    No full text
    It is now possible to collect hyperspectral video sequences (HVS) at a near real-time frame rate. The wealth of spectral, spatial and temporal information in those sequences is particularly appealing for chemical gas plume tracking. However, existing state-of-the-art methods for such applications produce only binary information regarding the position and shape of the gas plume in the HVS. Here, we introduce a novel method relying on spectral unmixing to perform chemical gas plume tracking, which provides information related to the gas plume concentration in addition to its spatial localization. The proposed approach is validated and compared with three state-of-the-art methods on a real HVS.
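
    The sketch below illustrates the unmixing step such an approach can build on: per-pixel abundances are estimated by non-negative least squares against known endmember spectra, and the abundance of the gas endmember provides a concentration-like map. It assumes SciPy, uses synthetic placeholder spectra, and is not the tracking method proposed in the paper.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_bands, n_endmembers, n_pixels = 128, 4, 500
    endmembers = np.abs(rng.normal(size=(n_bands, n_endmembers)))   # last column: gas signature
    true_abund = np.abs(rng.normal(size=(n_endmembers, n_pixels)))
    pixels = endmembers @ true_abund + 0.01 * rng.normal(size=(n_bands, n_pixels))

    # Gas "concentration" map: abundance of the gas endmember at every pixel.
    gas_abundance = np.array([nnls(endmembers, pixels[:, i])[0][-1]
                              for i in range(n_pixels)])
    print(gas_abundance.shape)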

    Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators

    No full text
    Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists in the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
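
    The sketch below illustrates the operators involved: morphological half-gradients (internal: image minus erosion; external: dilation minus image) extract bright and dark details from the PAN, which are then injected band-wise into the upsampled MS. It assumes scikit-image, uses synthetic placeholder data, and is only a crude stand-in for the complete scheme designed in the paper.

    import numpy as np
    from skimage.morphology import erosion, dilation, disk

    rng = np.random.default_rng(0)
    rho = 4
    pan = rng.random((128, 128))
    ms = rng.random((4, 128 // rho, 128 // rho))
    ms_up = ms.repeat(rho, axis=1).repeat(rho, axis=2)

    se = disk(2)                                    # structuring element
    half_grad_int = pan - erosion(pan, se)          # bright details
    half_grad_ext = dilation(pan, se) - pan         # dark details
    details = half_grad_int - half_grad_ext

    sharpened = ms_up + details[None, :, :]         # naive band-wise detail injection
    print(sharpened.shape)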

    Image fusion and reconstruction of compressed data: A joint approach

    Get PDF
    In the context of data fusion, pansharpening refers to the combination of a panchromatic (PAN) and a multispectral (MS) image, aimed at generating an image that features both the high spatial resolution of the former and the high spectral diversity of the latter. In this work we present a model to jointly solve the problem of data fusion and reconstruction of a compressed image; the latter is envisioned to be generated solely with optical on-board instruments and stored in place of the original sources. The burden of data downlink is hence significantly reduced, at the expense of a more laborious analysis at the ground segment to estimate the missing information. The reconstruction algorithm estimates the target sharpened image directly instead of decompressing the original sources beforehand; a viable and practical novel solution is also introduced to show the effectiveness of the approach.
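
    For context, the sketch below shows a standard component-substitution pansharpening step (a Brovey-like intensity substitution), the kind of conventional product against which the jointly reconstructed image can be compared; the data are synthetic placeholders and this is not the joint fusion and reconstruction model presented here.

    import numpy as np

    rng = np.random.default_rng(0)
    rho = 4
    pan = rng.random((128, 128))
    ms = rng.random((4, 128 // rho, 128 // rho))
    ms_up = ms.repeat(rho, axis=1).repeat(rho, axis=2)

    intensity = ms_up.mean(axis=0)                   # crude intensity component
    gain = pan / (intensity + 1e-12)                 # Brovey-style multiplicative detail
    pansharpened = ms_up * gain[None, :, :]
    print(pansharpened.shape)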